We establish the first general connection between the design of quantum algorithms and circuit lower bounds. Specifically, let $\mathfrak{C}$ be a class of polynomial-size concepts, and suppose that $\mathfrak{C}$ can be learned under the uniform distribution with membership queries, with error $1/2 - \gamma$, by a time-$T$ quantum algorithm. We prove that if $\gamma^2 \cdot T \ll 2^n/n$, then $\mathsf{BQE} \nsubseteq \mathfrak{C}$, where $\mathsf{BQE} = \mathsf{BQTIME}[2^{O(n)}]$ is an exponential-time analogue of $\mathsf{BQP}$. This result is optimal in both $\gamma$ and $T$, since it is not hard to learn any class $\mathfrak{C}$ in (classical) time $T = 2^n$ (with no error), or in quantum time $T = \mathsf{poly}(n)$ with error at most $1/2 - \Omega(2^{-n/2})$ via Fourier sampling. In other words, even a marginal improvement on these generic learning algorithms would lead to major consequences in complexity theory. Our proof builds on several works in learning theory, pseudorandomness, and computational complexity, and, crucially, on a connection between non-trivial classical learning algorithms and circuit lower bounds established by Oliveira and Santhanam (CCC 2017). Extending their approach to quantum learning algorithms turns out to present significant challenges. To achieve this, we show how pseudorandom generators imply learning-to-lower-bound connections in a generic fashion, construct the first conditional pseudorandom generator secure against uniform quantum computations, and extend the local list-decoding algorithm of Impagliazzo, Jaiswal, Kabanets, and Wigderson (SICOMP 2010) to quantum circuits via a delicate analysis. We believe that these contributions are of independent interest and may find other applications.
Recently, there has been growing interest in improving the resources available to Intrusion Detection System (IDS) techniques. In this sense, several cybersecurity studies show that intrusions into computing environments and data hijacking are increasingly frequent and complex. The criticality of business operations in environments that rely on computing resources leaves no room for vulnerable information. Cybersecurity has become an indispensable dimension of technology in corporations, and security teams deal daily with preventing the risk of intrusions into the environment. Thus, the main objective of this study was to investigate the Ensemble Learning technique using the Stacking method, supported by the Support Vector Machine (SVM) and k-Nearest Neighbour (kNN) algorithms, aiming to optimize the results for DDoS attack detection. For this, the Intrusion Detection System concept was applied with the Orange data mining and machine learning tool to obtain better results.
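As a rough illustration of the stacking idea only (not the paper's actual pipeline, which is built in the Orange tool), the sketch below combines two hand-rolled base learners, a 1-nearest-neighbour rule and a nearest-centroid rule standing in for kNN and SVM, under a simple meta-rule; the flow features and data are invented:

```python
# Minimal sketch of stacked generalization for DDoS detection.
# A 1-NN and a nearest-centroid classifier stand in for the kNN and SVM
# base learners; the meta-rule here is a simple OR over base votes rather
# than a trained meta-learner. Features and data are illustrative.

def knn_predict(train, labels, x):
    # 1-nearest neighbour by squared Euclidean distance
    d = [sum((a - b) ** 2 for a, b in zip(p, x)) for p in train]
    return labels[d.index(min(d))]

def centroid_predict(train, labels, x):
    # nearest class centroid (a crude linear-boundary stand-in for SVM)
    cents = {}
    for p, y in zip(train, labels):
        cents.setdefault(y, []).append(p)
    best, best_d = None, float("inf")
    for y, pts in cents.items():
        c = [sum(col) / len(pts) for col in zip(*pts)]
        d = sum((a - b) ** 2 for a, b in zip(c, x))
        if d < best_d:
            best, best_d = y, d
    return best

def stack_predict(train, labels, x):
    # level-1 rule: flag an attack if either base learner does
    votes = [knn_predict(train, labels, x), centroid_predict(train, labels, x)]
    return 1 if any(votes) else 0

# toy flows: (packets/s, mean packet size); 1 = DDoS, 0 = benign
X = [(900.0, 60.0), (950.0, 64.0), (10.0, 800.0), (12.0, 750.0)]
y = [1, 1, 0, 0]
print(stack_predict(X, y, (880.0, 62.0)))  # a high-rate, small-packet flow
```

In a real stacking setup the meta-learner would itself be trained on out-of-fold base predictions rather than hard-coded.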
Despite being responsible for state-of-the-art results in several computer vision and natural language processing tasks, neural networks have faced harsh criticism due to some of their current shortcomings. One of them is that neural networks are correlation machines prone to model biases within the data instead of focusing on actual useful causal relationships. This problem is particularly serious in application domains affected by aspects such as race, gender, and age. To prevent models from making unfair decisions, the AI community has concentrated efforts on correcting algorithmic biases, giving rise to the research area now widely known as fairness in AI. In this survey paper, we provide an in-depth overview of the main debiasing methods for fairness-aware neural networks in the context of vision and language research. We propose a novel taxonomy to better organize the literature on debiasing methods for fairness, and we discuss the current challenges, trends, and important future work directions for the interested researcher and practitioner.
It has recently been shown that exploiting sparse network connectivity between successive layers of a deep neural network can provide benefits for large state-of-the-art models. However, network connectivity also plays a significant role in the learning curves of shallow networks, such as the classic Restricted Boltzmann Machine (RBM). A fundamental question is how to efficiently find connectivity patterns that improve the learning curve. Recent principled approaches explicitly treat network connectivity as parameters that must be optimized within the model, but typically rely on continuous functions to represent connections and on explicit penalties. This work presents a method based on the idea of network gradients to find an optimal connectivity pattern for an RBM: the gradient of every possible connection is computed, given a specific connectivity pattern, and the gradient drives a continuous connection-strength parameter that in turn determines the connectivity pattern. Learning the RBM parameters and learning the network connectivity are thus genuinely performed jointly, albeit with different learning rates, and without changes to the objective function. The method is applied to the MNIST dataset, showing that better RBM models are found for the benchmark tasks of sample generation and input classification.
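A minimal sketch of the joint update described above, assuming a 1-step contrastive-divergence gradient and made-up sizes and learning rates (the paper's exact parametrization may differ):

```python
# Toy sketch of the idea: each possible visible-hidden connection keeps a
# continuous "strength" S[i][j] updated by the same CD-style gradient as the
# weight, and the binary connectivity mask is just sign(S[i][j]) >= 0.
# Sizes, learning rates, and the CD-1 gradient are illustrative.

import random, math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

nv, nh = 4, 2
rng = random.Random(0)
W = [[rng.gauss(0, 0.1) for _ in range(nh)] for _ in range(nv)]
S = [[0.0] * nh for _ in range(nv)]      # connection strengths
lr_w, lr_s = 0.1, 0.01                   # different learning rates

def mask(i, j):
    return 1.0 if S[i][j] >= 0 else 0.0  # current connectivity pattern

def cd1_gradient(v0):
    # one step of contrastive divergence with the masked weights
    h0 = [sigmoid(sum(v0[i] * W[i][j] * mask(i, j) for i in range(nv)))
          for j in range(nh)]
    v1 = [sigmoid(sum(h0[j] * W[i][j] * mask(i, j) for j in range(nh)))
          for i in range(nv)]
    h1 = [sigmoid(sum(v1[i] * W[i][j] * mask(i, j) for i in range(nv)))
          for j in range(nh)]
    return [[v0[i] * h0[j] - v1[i] * h1[j] for j in range(nh)]
            for i in range(nv)]

v = [1.0, 0.0, 1.0, 0.0]
for _ in range(50):
    g = cd1_gradient(v)
    for i in range(nv):
        for j in range(nh):
            W[i][j] += lr_w * g[i][j]    # learn the RBM parameters...
            S[i][j] += lr_s * g[i][j]    # ...and the connectivity, jointly

active = sum(mask(i, j) for i in range(nv) for j in range(nh))
print(f"{int(active)} of {nv * nh} connections active")
```

Note how the objective function is untouched: the same gradient drives both updates, only at different rates, matching the joint-learning claim in the abstract.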
TMIC is an App Inventor extension for deploying ML models for image classification developed with Google Teachable Machine in educational settings. Google Teachable Machine is an intuitive visual tool that provides workflow-oriented support for the development of ML models for image classification. Aimed at the use of models developed with Google Teachable Machine, the extension TMIC enables the deployment of trained models exported as TensorFlow.js in App Inventor, one of the most popular block-based programming environments for teaching computing in K-12. The extension was created with the App Inventor extension framework based on the extension PIC and is available under the BSD-3 license. It can be used in K-12, in introductory courses in higher education, or by anyone interested in creating intelligent apps with image classification. The extension TMIC is part of the research effort of the initiative Computação na Escola of the Department of Informatics and Statistics at the Federal University of Santa Catarina, Brazil, which aims at introducing AI education in K-12.
In recent years, the application of deep learning algorithms in Earth observation (EO) has enabled significant progress in fields that rely on remotely sensed data. However, given the scale of data in EO, creating large datasets with expert-made pixel-level annotations is expensive and time-consuming. In this context, priors are regarded as an attractive way to alleviate the burden of manual labeling when training deep learning methods for EO, and for some applications such priors are readily available. Motivated by the great success of contrastive learning for self-supervised feature representation learning in many computer vision tasks, this study proposes an online deep clustering method that uses crop label proportions as priors to learn a sample-level classifier from government crop-proportion data for an entire agricultural region. We evaluated the method with two large datasets from two different agricultural regions in Brazil. Extensive experiments show that the method is robust to different data types (synthetic aperture radar and optical images), reporting higher accuracy values for the major crop types in the target regions. It can thus alleviate the burden of large-scale image annotation in EO applications.
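The proportion prior described above can be sketched as a batch-level loss that compares the model's average class assignment against the known regional proportions; the class shares and soft assignments below are invented, and the paper's actual clustering objective may differ:

```python
# Sketch of a label-proportion prior: penalize the mismatch (KL divergence)
# between the batch-average class assignment of a model and known crop
# proportions for the region. Proportions and batches are made up.

import math

def kl(p, q, eps=1e-9):
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def proportion_loss(batch_probs, prior):
    # average the per-sample soft assignments, then compare to the prior
    n, k = len(batch_probs), len(prior)
    avg = [sum(row[c] for row in batch_probs) / n for c in range(k)]
    return kl(prior, avg)

prior = [0.7, 0.2, 0.1]                       # e.g., crop-type shares
good = [[0.8, 0.1, 0.1], [0.6, 0.3, 0.1]]     # batch matching the prior
bad = [[0.1, 0.1, 0.8], [0.0, 0.2, 0.8]]      # batch far from the prior
print(proportion_loss(good, prior) < proportion_loss(bad, prior))  # True
```

A training loop would add such a term to the clustering loss so that batches drift toward the government-reported crop proportions without any per-pixel labels.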
This paper introduces a matrix parametrization method based on the Loeffler discrete cosine transform (DCT) algorithm. As a result, a new class of eight-point DCT approximations is proposed, capable of unifying the mathematical formalism of several eight-point DCT approximations in the literature. Pareto-efficient DCT approximations are obtained through multicriteria optimization, where computational complexity, proximity, and coding performance are considered. Efficient approximations and their scaled 16- and 32-point versions are embedded into image and video encoders, including a JPEG-like codec and the H.264/AVC and H.265/HEVC standards. The results are compared with the unmodified standard codecs. Efficient approximations are mapped and implemented on a Xilinx VLX240T FPGA and evaluated for area, speed, and power consumption.
In this paper, we study the phase transitions of the $q$-state Potts model by means of several unsupervised machine learning techniques, namely principal component analysis (PCA), $k$-means clustering, uniform manifold approximation and projection (UMAP), and topological data analysis (TDA). Even though in all cases we are able to retrieve the correct critical temperature $T_c(q)$, for $q = 3, 4$ and $5$, the results show that non-linear methods such as UMAP and TDA depend on finite-size effects, while still being able to distinguish between first- and second-order phase transitions. This study can be considered a benchmark for the use of different unsupervised machine learning algorithms in the investigation of phase transitions.
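A toy version of one step of such a pipeline, $k$-means with $k = 2$ applied to a per-configuration order parameter, with synthetic numbers standing in for sampled Potts configurations:

```python
# Toy illustration of the unsupervised approach: cluster per-configuration
# magnetizations with k-means (k = 2) so that ordered and disordered phases
# separate without labels. Real inputs would be Potts spin configurations
# sampled across temperatures; the numbers below are synthetic stand-ins.

def kmeans_1d(xs, iters=20):
    c0, c1 = min(xs), max(xs)                 # simple initialization
    for _ in range(iters):
        a = [x for x in xs if abs(x - c0) <= abs(x - c1)]
        b = [x for x in xs if abs(x - c0) > abs(x - c1)]
        c0, c1 = sum(a) / len(a), sum(b) / len(b)
    return sorted([c0, c1])

# |magnetization| per sample: high in the ordered phase, low above T_c
ordered = [0.95, 0.92, 0.90, 0.93]
disordered = [0.10, 0.15, 0.08, 0.12]
lo, hi = kmeans_1d(ordered + disordered)
print(round(lo, 3), round(hi, 3))             # two well-separated centroids
```

Sweeping the temperature and watching where samples flip between the two clusters gives an unsupervised estimate of $T_c(q)$.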
In artificial intelligence, we often seek to determine an unknown target function $y = f(\mathbf{x})$ of many variables given a finite set of examples $S = \{(\mathbf{x}^{(i)}, y^{(i)})\}$ with $\mathbf{x}^{(i)} \in D$, where $D$ is a domain of interest. We refer to $S$ as the training set, and the ultimate task is to identify a mathematical model that approximates this target function on new $\mathbf{x}$; model generalization is tested on a set $T = \{\mathbf{x}^{(j)}\} \subset D$ with $T \neq S$. However, for some applications, the main interest is approximating the unknown function on a larger domain $D'$ that contains $D$. In cases involving the design of new structures, for example, we may be interested in maximizing $f$; thus, a model derived from $S$ should also generalize to $D'$ with values of $y$ larger than the maximum of $y$ over $S$. In this sense, an AI system would provide important information to guide the design process, e.g., using the learned model as a surrogate function to design new lab experiments. We introduce a method for multivariate regression based on the iterative fitting of a continued fraction, incorporating additive spline models. We compare it with methods such as AdaBoost, kernel regression, linear regression, Lasso Lars, linear support vector regression, multilayer perceptrons, random forests, stochastic gradient descent, and XGBoost. We evaluate the performance on the important problem of predicting the critical temperature of superconductors based on physico-chemical properties.
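The model class can be sketched as a truncated continued fraction whose terms are functions of the input; here the terms are plain linear functions of one variable and the coefficients are made up (the paper's method uses multivariate, spline-based terms fitted iteratively):

```python
# Sketch of a continued-fraction regression model:
#   f(x) = a0(x) + b0(x) / (a1(x) + b1(x) / (a2(x) + ...))
# Evaluation starts from the deepest term and works outward. The depth,
# term functions, and coefficients below are illustrative only.

def cf_eval(x, layers):
    # layers: list of (a, b) pairs of callables; the last layer's b is unused
    value = layers[-1][0](x)
    for a, b in reversed(layers[:-1]):
        value = a(x) + b(x) / value
    return value

# depth-3 continued fraction with simple linear terms
layers = [
    (lambda x: 1.0 + 0.5 * x, lambda x: 1.0),
    (lambda x: 2.0,           lambda x: x),
    (lambda x: 4.0,           None),
]
print(cf_eval(0.0, layers), cf_eval(2.0, layers))
```

Fitting proceeds one level of the fraction at a time, each level refining the residual left by the shallower levels, which is what makes the iterative scheme tractable.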
There are multiple scales of abstraction from which we can describe the same image, depending on whether we are focusing on fine-grained details or a more global attribute of the image. In brain mapping, learning to automatically parse images to build representations of both small-scale features (e.g., the presence of cells or blood vessels) and global properties of an image (e.g., which brain region the image comes from) is a crucial and open challenge. However, most existing datasets and benchmarks for neuroanatomy consider only a single downstream task at a time. To bridge this gap, we introduce a new dataset, annotations, and multiple downstream tasks that provide diverse ways to read out information about brain structure and architecture from the same image. Our multi-task neuroimaging benchmark (MTNeuro) is built on volumetric, micrometer-resolution X-ray microtomography images spanning a large thalamocortical section of mouse brain, encompassing multiple cortical and subcortical regions. We generated a number of different prediction challenges and evaluated several supervised and self-supervised models for brain-region prediction and pixel-level semantic segmentation of microstructures. Our experiments not only highlight the rich heterogeneity of this dataset, but also provide insights into how self-supervised approaches can be used to learn representations that capture multiple attributes of a single image and perform well on a variety of downstream tasks. Datasets, code, and pre-trained baseline models are provided at: https://mtneuro.github.io/ .